60 research outputs found

    Unsupervised experience with temporal continuity of the visual environment is causally involved in the development of V1 complex cells

    Unsupervised adaptation to the spatiotemporal statistics of visual experience is a key computational principle that has long been assumed to govern postnatal development of visual cortical tuning, including orientation selectivity of simple cells and position tolerance of complex cells in primary visual cortex (V1). Yet, causal empirical evidence supporting this hypothesis is scant. Here, we show that degrading the temporal continuity of visual experience during early postnatal life leads to a sizable reduction of the number of complex cells and to an impairment of their functional properties while fully sparing the development of simple cells. This causally implicates adaptation to the temporal structure of the visual input in the development of transformation tolerance but not of shape tuning, thus tightly constraining computational models of unsupervised cortical learning.

    A machine learning framework to optimize optic nerve electrical stimulation for vision restoration

    Optic nerve electrical stimulation is a promising technique to restore vision in blind subjects. Machine learning methods can be used to select effective stimulation protocols, but they require a model of the stimulated system to generate enough training data. Here, we use a convolutional neural network (CNN) as a model of the ventral visual stream. A genetic algorithm drives the activation of the units in a layer of the CNN representing a cortical region toward a desired pattern, by refining the activation imposed at a layer representing the optic nerve. To simulate the pattern of activation elicited by the sites of an electrode array, a simple point-source model was introduced, and its optimization process was investigated for static and dynamic scenes. Psychophysical data confirm that our stimulation evolution framework produces results compatible with natural vision. Machine learning approaches could become a very powerful tool to optimize and personalize neuroprosthetic systems.
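    The optimization loop described above can be sketched as a minimal genetic algorithm. The snippet below is a toy illustration under stated assumptions, not the paper's implementation: a fixed random nonlinear map stands in for the trained CNN, and `forward`, `fitness`, and all layer sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the CNN: a fixed nonlinear map from a
# 16-unit "optic nerve" layer to a 32-unit "cortical" layer.
W = rng.standard_normal((16, 32))

def forward(x):
    return np.tanh(x @ W)

target = forward(rng.standard_normal(16))   # desired cortical pattern

def fitness(x):
    # Higher is better: negative distance to the desired activation
    return -np.linalg.norm(forward(x) - target)

# Minimal genetic algorithm: elitist truncation selection + mutation
pop = rng.standard_normal((64, 16))
init_best = max(fitness(x) for x in pop)
for _ in range(200):
    scores = np.array([fitness(x) for x in pop])
    elite = pop[np.argsort(scores)[-16:]]                # keep the best 16
    kids = elite[rng.integers(0, 16, size=48)]           # clone parents
    kids = kids + 0.1 * rng.standard_normal(kids.shape)  # Gaussian mutation
    pop = np.vstack([elite, kids])
final_best = max(fitness(x) for x in pop)
```

    Because the elite are carried over unchanged, the best fitness is monotonically non-decreasing across generations, which is what makes such mutation-only schemes practical for refining stimulation patterns.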

    A template-matching algorithm for laminar identification of cortical recording sites from evoked response potentials

    In recent years, the advent of the so-called silicon probes has made it possible to homogeneously sample spikes and local field potentials (LFPs) from a regular grid of cortical recording sites. In principle, this allows inferring the laminar location of the sites based on the spatiotemporal pattern of LFPs recorded along the probe, as in the well-known current source-density (CSD) analysis. This approach, however, has several limitations, since it relies on visual identification of landmark features (i.e., current sinks and sources) by human operators, features that can be absent from the CSD pattern if the probe does not span the whole cortical thickness, thus making manual labeling harder. Furthermore, as with any manual annotation procedure, the typical CSD-based workflow for laminar identification of recording sites is affected by subjective judgment, undermining the consistency and reproducibility of results. To overcome these limitations, we developed an alternative approach, based on finding the optimal match between the LFPs recorded along a probe in a given experiment and a template LFP profile that was computed using 18 recording sessions, in which the depth of the recording sites had been recovered through histology. We show that this method can achieve an accuracy of 79 μm in recovering the cortical depth of recording sites and a 76% accuracy in inferring their laminar location. As such, our approach provides an alternative to CSD that, being fully automated, is less prone to the idiosyncrasies of subjective judgment and works reliably also for recordings spanning a limited cortical stretch. NEW & NOTEWORTHY Knowing the depth and laminar location of the microelectrodes used to record neuronal activity from the cerebral cortex is crucial to properly interpret the recorded patterns of neuronal responses.
Here, we present an innovative approach that allows inferring such properties with high accuracy and in an automated way (i.e., without the need for visual inspection and manual annotation) from the evoked response potentials elicited by sensory (e.g., visual) stimuli.
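    The matching step can be illustrated with a toy sketch: slide a short recorded LFP profile along a depth-registered template and pick the depth offset with the highest correlation. The template, probe span, and noise level below are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical template: average evoked LFP profile over cortical depth
# (60 depth bins x 100 time bins), as if built from registered sessions.
template = rng.standard_normal((60, 100))

# A "recording" spanning a limited cortical stretch: 24 contiguous
# channels cut out of the template at an unknown depth, plus noise.
true_offset = 18
probe = template[true_offset:true_offset + 24] \
        + 0.3 * rng.standard_normal((24, 100))

def best_offset(probe, template):
    """Slide the probe's LFP profile along the template and return the
    depth offset with the highest pattern correlation."""
    n = probe.shape[0]
    scores = []
    for off in range(template.shape[0] - n + 1):
        window = template[off:off + n]
        scores.append(np.corrcoef(probe.ravel(), window.ravel())[0, 1])
    return int(np.argmax(scores))

est = best_offset(probe, template)
```

    Because the correlation is computed over the full spatiotemporal pattern rather than a few landmark sinks and sources, no manual feature identification is needed, which is the key advantage over CSD-based labeling.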

    Rats spontaneously perceive global motion direction of drifting plaids

    Computing global motion direction of extended visual objects is a hallmark of primate high-level vision. Although neurons selective for global motion have also been found in mouse visual cortex, it remains unknown whether rodents can combine multiple motion signals into global, integrated percepts. To address this question, we trained two groups of rats to discriminate either gratings (G group) or plaids (i.e., superpositions of gratings with different orientations; P group) drifting horizontally along opposite directions. After the animals learned the task, we applied a visual priming paradigm, where presentation of the target stimulus was preceded by the brief presentation of either a grating or a plaid. The extent to which rat responses to the targets were biased by such prime stimuli provided a measure of the spontaneous, perceived similarity between primes and targets. We found that gratings and plaids, when used as primes, were equally effective at biasing the perception of plaid direction for the rats of the P group. Conversely, for the G group, only the gratings acted as effective prime stimuli, while the plaids failed to alter the perception of grating direction. To interpret these observations, we simulated a decision neuron reading out the representations of gratings and plaids, as conveyed by populations of either component or pattern cells (i.e., local or global motion detectors). We concluded that the findings for the P group are highly consistent with the existence of a population of pattern cells, playing a functional role similar to that demonstrated in primates. We also explored different scenarios that could explain the failure of the plaid stimuli to elicit a sizable priming magnitude for the G group. These simulations yielded testable predictions about the properties of motion representations in rodent visual cortex at the single-cell and circuitry level, thus paving the way to future neurophysiology experiments.
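    The intuition behind the component-cell versus pattern-cell readout can be sketched with a toy population of direction-tuned units. Everything here (Gaussian tuning, the 15° spacing, the ±60° plaid components) is a hypothetical illustration, not the paper's model: for pattern cells a rightward plaid and a rightward grating yield nearly identical population responses, whereas for component cells they do not.

```python
import numpy as np

dirs = np.arange(0, 360, 15, dtype=float)   # preferred directions (deg)

def population(stim_dirs, width=30.0):
    # Gaussian direction-tuning bumps, summed over stimulus components
    d = (dirs[:, None] - np.asarray(stim_dirs)[None, :] + 180.0) % 360.0 - 180.0
    return np.exp(-d**2 / (2 * width**2)).sum(axis=1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

grating = population([0])             # rightward-drifting grating
plaid_comp = population([60, -60])    # what component cells see in the plaid
plaid_patt = population([0])          # what pattern cells see: global direction

component_sim = cosine(grating, plaid_comp)
pattern_sim = cosine(grating, plaid_patt)
```

    A decision neuron reading out a pattern-cell population would thus treat the plaid prime like a grating prime (as observed in the P group), while a purely component-based representation would not.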

    A Self-Calibrating, Camera-Based Eye Tracker for the Recording of Rodent Eye Movements

    Much of neurophysiology and vision science relies on careful measurement of a human or animal subject's gaze direction. Video-based eye trackers have emerged as an especially popular option for gaze tracking, because they are easy to use and are completely non-invasive. However, video eye trackers typically require a calibration procedure in which the subject must look at a series of points at known gaze angles. While it is possible to rely on innate orienting behaviors for calibration in some non-human species, other species, such as rodents, do not reliably saccade to visual targets, making this form of calibration impossible. To overcome this problem, we developed a fully automated infrared video eye-tracking system that is able to quickly and accurately calibrate itself without requiring cooperation from the subject. This technique relies on the optical geometry of the cornea and uses computer-controlled motorized stages to rapidly estimate the geometry of the eye relative to the camera. The accuracy and precision of our system were carefully measured using an artificial eye, and its capability to monitor the gaze of rodents was verified by tracking spontaneous saccades and evoked oculomotor reflexes in head-fixed rats (in both cases, we obtained measurements that are consistent with those found in the literature). Overall, given its fully automated nature and its intrinsic robustness against operator errors, we believe that our eye-tracking system enhances the utility of existing approaches to gaze-tracking in rodents and represents a valid tool for rodent vision studies.

    What response properties do individual neurons need to underlie position and clutter “invariant” object recognition?

    http://jn.physiology.org/content/102/1/360.abstract
    Primates can easily identify visual objects over large changes in retinal position—a property commonly referred to as position “invariance.” This ability is widely assumed to depend on neurons in inferior temporal cortex (IT) that can respond selectively to isolated visual objects over similarly large ranges of retinal position. However, in the real world, objects rarely appear in isolation, and the interplay between position invariance and the representation of multiple objects (i.e., clutter) remains unresolved. At the heart of this issue is the intuition that the representations of nearby objects can interfere with one another and that the large receptive fields needed for position invariance can exacerbate this problem by increasing the range over which interference acts. Indeed, most IT neurons' responses are strongly affected by the presence of clutter. While external mechanisms (such as attention) are often invoked as a way out of the problem, we show (using recorded neuronal data and simulations) that the intrinsic properties of IT population responses, by themselves, can support object recognition in the face of limited clutter. Furthermore, we carried out extensive simulations of hypothetical neuronal populations to identify the essential individual-neuron ingredients of a good population representation. These simulations show that the crucial neuronal property to support recognition in clutter is not preservation of response magnitude, but preservation of each neuron's rank-order object preference under identity-preserving image transformations (e.g., clutter).
    Because IT neuronal responses often exhibit that response property, while neurons in earlier visual areas (e.g., V1) do not, we suggest that preserving the rank-order object preference regardless of clutter, rather than the response magnitude, more precisely describes the goal of individual neurons at the top of the ventral visual stream.
    Funding: National Eye Institute (Grant R01-EY-014970); Pew Charitable Trusts; McKnight Foundation; National Eye Institute (NEI Integrative Training Grant for Vision); National Defense Science and Engineering Graduate Fellowship; Charles A. King Trust Postdoctoral Fellowship Program; Compagnia di San Paolo (Foundation)
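    The key quantity, preservation of a neuron's rank-order object preference, can be illustrated as a Spearman-style correlation between its object preferences measured in isolation and in clutter. The neurons and firing rates below are made up for illustration; only the rank-correlation idea comes from the text above.

```python
import numpy as np

def rank_order_preservation(resp_a, resp_b):
    # Spearman-style correlation: Pearson correlation of the rank orders.
    # 1.0 means the object-preference ranking is perfectly preserved.
    rank = lambda v: np.argsort(np.argsort(v))
    return float(np.corrcoef(rank(resp_a), rank(resp_b))[0, 1])

isolated = np.array([12.0, 7.0, 3.0, 9.0, 1.0])  # spikes/s for 5 objects
it_like = 0.4 * isolated + 0.5   # clutter suppresses rates, keeps ranks
v1_like = np.array([4.0, 9.0, 6.0, 2.0, 8.0])    # ranks scrambled by clutter

preserved = rank_order_preservation(isolated, it_like)
scrambled = rank_order_preservation(isolated, v1_like)
```

    Note that the `it_like` responses are strongly suppressed relative to isolation, yet their rank-order preservation is perfect, which is exactly the dissociation between response magnitude and rank order emphasized above.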

    Intrinsic dimension of data representations in deep neural networks

    Deep neural networks progressively transform their inputs across multiple processing layers. What are the geometrical properties of the representations learned by these networks? Here we study the intrinsic dimensionality (ID) of data representations, i.e., the minimal number of parameters needed to describe a representation. We find that, in a trained network, the ID is orders of magnitude smaller than the number of units in each layer. Across layers, the ID first increases and then progressively decreases in the final layers. Remarkably, the ID of the last hidden layer predicts classification accuracy on the test set. These results can neither be found by linear dimensionality estimates (e.g., with principal component analysis), nor in representations that had been artificially linearized. They are also absent in untrained networks and in networks trained on randomized labels. This suggests that neural networks that can generalize are those that transform the data into low-dimensional, but not necessarily flat, manifolds.
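    A standard nonlinear ID estimator in this line of work is Two-NN (Facco et al., 2017), which uses only the ratio of each point's second- to first-nearest-neighbor distances. A minimal sketch on synthetic data, a 2-D plane embedded in a 50-D ambient space, where linear estimates would be fooled only by curvature but a low ID is still recovered:

```python
import numpy as np

def twonn_id(X):
    """Two-NN intrinsic dimension estimate: the ratio mu = r2/r1 of
    2nd- to 1st-nearest-neighbor distances has pdf d * mu^(-d-1), so
    E[ln mu] = 1/d and the MLE is N / sum(ln mu)."""
    # Squared pairwise distances via the Gram matrix (no big 3-D array)
    sq = (X ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    np.fill_diagonal(d2, np.inf)
    d2.sort(axis=1)
    mu = np.sqrt(d2[:, 1] / d2[:, 0])
    return len(X) / np.log(mu).sum()

rng = np.random.default_rng(2)
coords = rng.random((1000, 2))                          # 2-D manifold coords
basis = np.linalg.qr(rng.standard_normal((50, 2)))[0]   # orthonormal embedding
X = coords @ basis.T                                    # 1000 points in 50-D

est = twonn_id(X)
```

    The estimate comes out near 2, far below the 50 ambient dimensions, mirroring the paper's finding that layer representations occupy manifolds of far lower dimension than the unit count.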

    Rat sensitivity to multipoint statistics is predicted by efficient coding of natural scenes

    Efficient processing of sensory data requires adapting the neuronal encoding strategy to the statistics of natural stimuli. Previously, in Hermundstad et al., 2014, we showed that local multipoint correlation patterns that are most variable in natural images are also the most perceptually salient for human observers, in a way that is compatible with the efficient coding principle. Understanding the neuronal mechanisms underlying such adaptation to image statistics will require performing invasive experiments that are impossible in humans. Therefore, it is important to understand whether a similar phenomenon can be detected in animal species that allow for powerful experimental manipulations, such as rodents. Here we selected four image statistics (from single- to four-point correlations) and trained four groups of rats to discriminate between white noise patterns and binary textures containing variable intensity levels of one of such statistics. We interpreted the resulting psychometric data with an ideal observer model, finding a sharp decrease in sensitivity from two- to four-point correlations and a further decrease from four- to three-point. This ranking fully reproduces the trend we previously observed in humans, thus extending a direct demonstration of efficient coding to a species where neuronal and developmental processes can be interrogated and causally manipulated.
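    The texture construction can be sketched for the simplest non-trivial case, a horizontal two-point correlation: each pixel copies its left neighbor with a probability set by the desired intensity level of the statistic. The texture size and the 0.6 level below are arbitrary illustration choices, and the actual study spanned single- through four-point statistics.

```python
import numpy as np

def two_point_texture(n, gamma, rng):
    """Binary (+/-1) texture whose horizontal two-point correlation is
    gamma: each pixel copies its left neighbour with probability
    (1 + gamma) / 2, so E[pixel * left_neighbour] = gamma."""
    t = np.empty((n, n))
    t[:, 0] = rng.choice([-1.0, 1.0], size=n)   # first column: fair coin
    for j in range(1, n):
        copy = rng.random(n) < (1.0 + gamma) / 2.0
        t[:, j] = np.where(copy, t[:, j - 1], -t[:, j - 1])
    return t

rng = np.random.default_rng(3)
tex = two_point_texture(128, 0.6, rng)
corr = float(np.mean(tex[:, 1:] * tex[:, :-1]))   # empirical statistic
```

    Varying `gamma` from 0 (white noise) upward produces the graded intensity levels against which a psychometric discrimination curve can be measured.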

    Editorial: Sensory Adaptation
